A Comparative Study of the MPI Communication Primitives on a Cluster

Authors

  • Arnab Sinha
  • Nabanita Das
Abstract

MPI (Message Passing Interface) has become the de facto standard for implementing parallel programs on distributed systems. In MPI, the two basic communication primitives are point-to-point communication and broadcast. In this paper, we evaluate and compare the performance of broadcast against point-to-point communication (both blocking and non-blocking) from the MPI-1 standard library on a cluster computer, for the task of communicating the same data block among all processors. Performance in terms of delay is compared while varying the number of processors and the data block size. The tool Jumpshot-4 is used for detailed measurement of the performance of the MPI communication routines.


Similar articles

ParLeda: a Library for Parallel Processing in Computational Geometry Applications

ParLeda is a software library that provides the basic primitives needed for parallel implementation of computational geometry applications. It can also be used in implementing a parallel application that uses geometric data structures. The parallel model that we use is based on a new heterogeneous parallel model named HBSP, which is based on BSP and is introduced here. ParLeda uses two main lib...

Full text

Coprocessor design to support MPI primitives in configurable multiprocessors

The Message Passing Interface (MPI) is a widely used standard for interprocessor communications in parallel computers and PC clusters. Its functions are normally implemented in software due to their enormity and complexity, thus resulting in large communication latencies. Limited hardware support for MPI is sometimes available in expensive systems. Reconfigurable computing has recently reached ...

Full text

Remote memory access: A case for portable, efficient and library independent parallel programming

In this work we make a strong case for remote memory access (RMA) as the only way to program a parallel computer by proposing a framework that supports RMA in a library independent way. If one uses our approach the parallel code one writes will run transparently under MPI-2 enabled libraries but also bulk-synchronous parallel libraries. The advantage of using RMA is code simplicity, reduced pro...

Full text

CoreGRID Workshop on Grid Systems

While MPI became a standard programming model for developing cluster applications, so far none of the many programming models proposed for Grids have reached such status. For this reason, using MPI in Grids is considered a cost-effective solution. Unlike other related works that mainly aim to ease the execution of MPI processes in Grids, we understand that for obtaining...

Full text

Parallel computing using MPI and OpenMP on self-configured platform, UMZHPC.

Parallel computing is a topic of interest for a broad scientific community, since it speeds up many time-consuming algorithms in different application domains. In this paper, we introduce a novel platform for parallel computing using MPI and OpenMP on a set of networked PCs. UMZHPC is a free Linux-based parallel computing infrastructure that has been developed to cr...

Full text


Journal title:

Volume   Issue

Pages  -

Publication date: 2008